Transparency of reporting in CALL meta-analyses between 2003 and 2015
- Huifen Lin, Tsuiping Chen, Hsien-Chin Liou
Since its introduction by Glass in the 1970s, meta-analysis has become a widely accepted and preferred approach to research synthesis. Overcoming the weaknesses commonly associated with traditional narrative review and vote counting, meta-analysis is a statistical method for systematically aggregating and analyzing empirical studies by following well-established procedures. The findings of an appropriately conducted meta-analysis can inform important policy decisions and provide practical pedagogical suggestions. As publications employing meta-analysis have grown across a wide variety of disciplines, however, the method has drawn criticism because multiple meta-analyses in the same research domain can yield inconsistent findings. These inconsistencies arise partly from the alternatives available to meta-analysts at each major meta-analytic step. Researchers have therefore recommended transparent reporting of the decision made at every essential judgment call, so that results across multiple meta-analyses become replicable, consistent, and interpretable. This study explored the degree to which meta-analyses in the computer-assisted language learning (CALL) discipline transparently reported their decisions at every critical step. To this end, we retrieved 15 eligible meta-analyses in CALL published between 2003 and 2015. Features of these meta-analyses were extracted using a codebook modified from Cooper (2003) and Aytug, Rothstein, Zhou and Kern (2012). A transparency score was then calculated to examine how closely each meta-analysis complied with the reporting norms recommended in the literature. We discuss the strengths and weaknesses of the methodologies and offer suggestions for conducting quality meta-analyses in this domain.
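The transparency score described above can be sketched as the proportion of codebook reporting items a meta-analysis explicitly addresses. The item names and the equal-weight scoring rule below are illustrative assumptions, not the authors' actual codebook:

```python
# Hedged sketch: transparency of reporting as the fraction of codebook
# items a meta-analysis satisfies. Item names are hypothetical examples,
# not the items from Cooper (2003) or Aytug et al. (2012).

REPORTING_ITEMS = [
    "search_strategy_reported",
    "inclusion_criteria_reported",
    "coding_reliability_reported",
    "effect_size_model_reported",
    "publication_bias_assessed",
]

def transparency_score(study_report: dict) -> float:
    """Fraction of reporting items the meta-analysis explicitly addresses."""
    met = sum(1 for item in REPORTING_ITEMS if study_report.get(item, False))
    return met / len(REPORTING_ITEMS)

# A hypothetical meta-analysis that reports three of the five items.
example = {
    "search_strategy_reported": True,
    "inclusion_criteria_reported": True,
    "effect_size_model_reported": True,
}
print(transparency_score(example))  # 3 of 5 items -> 0.6
```

Scores like this make compliance comparable across the 15 retrieved meta-analyses, though real codebooks typically distinguish partial from full reporting rather than using a binary check.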
Computer-mediated communication (CMC) in L2 oral proficiency development: A meta-analysis
- Huifen Lin
The ever-growing interest in developing foreign or second language (L2) oral proficiency in computer-mediated communication (CMC) classrooms has produced a large body of studies examining both the direct and indirect effects of CMC interventions on the acquisition of oral competence. The present study employed a quantitative meta-analytic approach to investigate these effects by synthesizing (quasi-)experimental studies that provide empirical quantitative data for effect size calculation. A literature search located 25 relevant studies for the final analysis. Each study was independently coded for learner, design, and publication characteristics, and an averaged effect size was estimated from the included studies. The results reveal that communication mediated by computers/technologies produced a moderate positive effect on L2 learners' oral proficiency compared to face-to-face (F2F) communication or no interaction. Furthermore, CMC had roughly similar effects at the pronunciation, lexical, and syntactic levels of oral production; however, it may have a negative impact on fluency and accuracy. The meta-analysis also found that the effect of CMC on oral proficiency depends on several methodological factors, such as task type, outcome measurement, treatment length, and assessment task. Major findings include: (1) studies relying on elicited data are superior to those utilizing naturalistic data; (2) reading aloud seems to be the task that elicits the best oral performance from students; (3) surprisingly, CMC appeared to be harmful to accuracy and fluency; (4) studies that employed decision-making tasks generated the largest effect size, followed by studies that used more than one task type; (5) among the four task types, jigsaw actually generated a negative effect on oral performance; and (6) opinion-exchange, the most popular task among primary researchers, produced the smallest effect size.
These findings should be interpreted as exploratory rather than confirmatory, since each becomes less trustworthy once numerous other factors, such as the CMC task and the particular CMC tool used, are taken into account. Suggestions for future research are provided and the limitations of this meta-analysis are addressed.
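The averaged effect size estimation mentioned in the abstract can be sketched with the standard inverse-variance weighted mean. The fixed-effect model and the study values below are illustrative assumptions, not the model or data of the paper above:

```python
# Hedged sketch of standard meta-analytic effect size aggregation:
# an inverse-variance weighted mean under a fixed-effect model.
# The study values are hypothetical, not from the 25 synthesized studies.

def cohens_d(mean_t: float, mean_c: float, sd_pooled: float) -> float:
    """Standardized mean difference between treatment and control groups."""
    return (mean_t - mean_c) / sd_pooled

def weighted_mean_effect(effects: list, variances: list) -> float:
    """Weight each effect size by the inverse of its sampling variance."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# Three hypothetical studies: (effect size d, sampling variance).
ds = [0.4, 0.6, 0.5]
vs = [0.04, 0.09, 0.05]
print(round(weighted_mean_effect(ds, vs), 3))  # moderate positive average
```

Inverse-variance weighting gives more precise (larger, lower-variance) studies more influence on the average; a random-effects model, often preferred when studies differ in task type or CMC tool, would additionally incorporate between-study variance into the weights.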